lp-gateway: Add multi-router support #1948

Closed · wants to merge 11 commits into main from feat/lp-v2-gateway-multi-router

Conversation

@cdamian (Contributor) commented Aug 6, 2024:

Description

LP Gateway:

Multi-router

Added a new extrinsic for adding multi-routers, and 2 new storage items for multi-routers (see the sketch after this list):

  • one storage item that maps a router hash to a router.
    • the Router trait now has a hash() method that allows us to create a hash using certain router fields.
  • one storage item that maps a domain to a bounded vec of router hashes.
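
A hedged sketch of what these two storage items might look like inside the pallet module; names such as T::Router, Domain, and MaxRouterCount are assumptions for illustration, not necessarily the PR's exact code:

// Sketch only: the two storage items described above, as they might
// appear inside the #[frame_support::pallet] module.
use frame_support::pallet_prelude::*;

/// Maps a router hash to the router itself.
#[pallet::storage]
pub type Routers<T: Config> = StorageMap<_, Blake2_128Concat, T::Hash, T::Router>;

/// Maps a domain to the bounded vec of router hashes configured for it.
#[pallet::storage]
pub type DomainRouters<T: Config> =
    StorageMap<_, Blake2_128Concat, Domain, BoundedVec<T::Hash, T::MaxRouterCount>>;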

Inbound/outbound message handling:

Message proofs

The LPEncoding trait has 2 new methods (sketched below) used for:

  • retrieving a message proof from a message proof message, if any.
  • converting a message to a message proof message.
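
A rough sketch of what these two additions could look like on the trait; the method names and signatures are assumptions based on the description above, and a "message proof message" is taken to carry the 32-byte hash of the original message:

// Hypothetical additions to the LPEncoding trait; the real signatures
// in the PR may differ.
pub trait LPEncoding: Sized {
    // ...existing encoding/decoding methods...

    /// If `self` is a message-proof message, return the proof it carries.
    fn get_message_proof(&self) -> Option<[u8; 32]>;

    /// Convert `self` into its message-proof representation, i.e. a
    /// message that carries only the proof (hash) of `self`.
    fn to_message_proof(&self) -> Self;
}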

Outbound

The OutboundMessageHandler implementation now queues the full message using one router, and a message proof using each of the remaining routers (sketched below).
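
An illustrative sketch of that outbound shape; queue_message and the error variants RoutersNotFound/NotEnoughRouters are assumed helpers, not the PR's code:

// Sketch only: first router carries the full message, every other
// router carries only a proof of it.
fn queue_outbound<T: Config>(domain: Domain, message: T::Message) -> DispatchResult {
    let router_hashes =
        DomainRouters::<T>::get(&domain).ok_or(Error::<T>::RoutersNotFound)?;

    let (first, rest) = router_hashes
        .split_first()
        .ok_or(Error::<T>::NotEnoughRouters)?;

    // Full message through the first router.
    queue_message::<T>(*first, message.clone())?;

    // Message proof through each remaining router.
    let proof_message = message.to_message_proof();
    for router_hash in rest {
        queue_message::<T>(*router_hash, proof_message.clone())?;
    }

    Ok(())
}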

Inbound

Added 2 new storage items for keeping track of inbound messages and message proofs.

The inbound message processing logic checks that the required number of proofs has been received before submitting the message to the InboundMessageHandler (sketched below).
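
A hedged sketch of that check; the storage names mirror the description above, while the handler call signature and MessageNotFound error are assumptions:

// Sketch only: forward the message to the handler once the expected
// number of proofs (routers_count - 1) has been observed.
fn process_inbound<T: Config>(
    message_proof: [u8; 32],
    expected_proof_count: u32,
) -> DispatchResult {
    if InboundMessageProofCount::<T>::get(message_proof) < expected_proof_count {
        // Not enough proofs yet; the message stays pending in storage.
        return Ok(());
    }

    let message = InboundMessages::<T>::take(message_proof)
        .ok_or(Error::<T>::MessageNotFound)?;

    T::InboundMessageHandler::handle(message)
}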

Checklist:

  • I have added Rust doc comments to structs, enums, traits and functions
  • I have made corresponding changes to the documentation
  • I have performed a self-review of my code
  • I have added tests that prove my fix is effective or that my feature works
  • Invalidate all message proofs if the routers for a domain change

return (Err(Error::<T>::NotEnoughRouters.into()), weight);
}

let expected_proof_count: u32 = (routers_count - 1) as u32;
@cdamian (author):

Hey @hieronx, was wondering if the Solidity side already has (testing) support for sending these message proofs. This will be necessary for our integration tests.

Contributor:

Yes, Solidity already has a full implementation of the message proofs, as well as initiate/dispute message recovery.

@lemunozm (Contributor) left a comment:

I left some comments but I think the idea is the correct one!

pallets/liquidity-pools-gateway/src/lib.rs (resolved)
pallets/liquidity-pools-gateway/src/lib.rs (outdated, resolved)
libs/traits/src/liquidity_pools.rs (outdated, resolved)
pallets/liquidity-pools-gateway/src/lib.rs (outdated, resolved)
pallets/liquidity-pools-gateway/src/lib.rs (outdated, resolved)
Comment on lines 660 to 661
match InboundMessageProofCount::<T>::try_mutate(message_proof, |count| {
*count += 1;
Contributor:

Could the same router send the proof 3 times, and we would count 3?

@cdamian (author):

In this current state, yes. Especially given that we can get the same message more than once.

@cdamian (author):

Added some tests with different non-proof/proof messages to ensure that these storages remain sane, please let me know what you think.

Contributor:

I'll check in the next review pass!

@lemunozm (Contributor) left a comment:

I think the overall flow is the correct one, but there are some math issues in it <- it was OK!

Comment on lines 620 to 669
let _ = InboundMessages::<T>::clear(u32::MAX, None);
let _ = InboundMessageProofCount::<T>::clear(u32::MAX, None);
Contributor:

We can exceed the block weight here

@cdamian (author):

True, should we maybe add a flag that marks this storage for clearing and then attempt to clear this on_idle?

Contributor:

I think this is when the session_id comes to the rescue. on_idle should be as small as possible.
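
For reference, the session_id idea would make invalidation O(1): key the pending-message storages by a session counter and bump it whenever the routers change, instead of iterating and clearing storage (which can blow the block weight). A hedged sketch, with all names and the restructured storage being assumptions:

// Hypothetical sketch of the session_id approach.
#[pallet::storage]
pub type SessionId<T: Config> = StorageValue<_, u32, ValueQuery>;

#[pallet::storage]
pub type InboundMessageProofCount<T: Config> = StorageDoubleMap<
    _,
    Twox64Concat, u32,           // session id
    Blake2_128Concat, [u8; 32],  // message proof
    u32, ValueQuery,
>;

fn invalidate_all_pending<T: Config>() {
    // O(1): entries under the old session id are simply never read again
    // (they can be lazily removed later if storage bloat matters).
    SessionId::<T>::mutate(|id| *id = id.saturating_add(1));
}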

@cdamian (author):

OK, I will wrap up the normal inbound message processing logic to make sure I get it right, then I can do another pass to improve the storages etc.

pallets/liquidity-pools-gateway/src/mock.rs (outdated, resolved)
libs/traits/src/liquidity_pools.rs (outdated, resolved)
pallets/liquidity-pools-gateway/src/lib.rs (outdated, resolved)
pallets/liquidity-pools-gateway/src/lib.rs (outdated, resolved)
pallets/liquidity-pools-gateway/src/lib.rs (outdated, resolved)
return (Ok(()), weight);
}

if current_message_proof_count == total_expected_proof_count {
Contributor:

@cdamian, just one important thing. How do we know here that the counts belong to different routers and not to the same router? I think this is where the concept of "vote" comes in.

@cdamian (author):

That's the fun part, we do not...

@cdamian (author):

Not sure if we can use the domain_address though. If it comes via EVM it's most likely the same sender - the gateway router contract deployed on EVM. Or maybe I'm mixing things up...

Contributor:

It appears to me that we cannot rely on the domain_address based on our Slack conversation. We need to assume that in the future each domain has more than one configured router. So I would derive a "special origin" for the router, e.g. in the case of Axelar, axelar_gateway_precompile::Pallet::<T>::GatewayContract (see my Slack message).
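
One way to realize the "vote" idea from this thread, sketched under the assumption that each inbound submission can be attributed to a router (e.g. via such a router-specific origin): record which routers delivered the proof instead of keeping a bare counter, so duplicates from one router cannot inflate the count. Storage and function names here are hypothetical:

// Per-router "vote" storage replacing the plain counter:
// a router can contribute at most one proof per message.
#[pallet::storage]
pub type InboundProofVotes<T: Config> = StorageDoubleMap<
    _,
    Blake2_128Concat, [u8; 32],  // message proof
    Blake2_128Concat, T::Hash,   // router hash that delivered it
    (), OptionQuery,
>;

fn register_proof_vote<T: Config>(message_proof: [u8; 32], router: T::Hash) -> u32 {
    // Inserting twice for the same router is idempotent.
    InboundProofVotes::<T>::insert(message_proof, router, ());
    // The effective proof count is the number of distinct routers that voted.
    InboundProofVotes::<T>::iter_prefix(message_proof).count() as u32
}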

@wischli (Contributor) left a comment:

Overall looks good! Of course we need to iron out the open questions and apply the outcomes of our current Slack conversation. Really nice work with the UTs!

Comment on lines +82 to +84
pub fn hash(&self) -> T::Hash {
BlakeTwo256::hash(self.evm_chain.encode().as_slice())
}
Contributor:

Just to be sure: The router hash is solely used locally and not as part of any dispatched message, right? Asking because Solidity usually uses keccak256.

@cdamian (author):

Yes, this is mainly for the LP gateway to map a router hash to a router in storage.

pallets/liquidity-pools-gateway/src/lib.rs (outdated, resolved)
pallets/liquidity-pools-gateway/src/lib.rs (resolved)
let mut router_hashes = Vec::new();

for router in &routers {
router.init().map_err(|_| Error::<T>::RouterInitFailed)?;
Contributor:

Nit: It would be great to extend the RouterInitFailed error to have some payload such as the router hash. Otherwise it will be hard to know which router caused a failure when running into issues during multi-router setup. E.g.

	pub enum Error<T> {
		/// Router initialization failed.
		RouterInitFailed { router: T::Hash },
		// snip
	}
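
With such a variant, the call site in the loop above could then attach the failing router's hash; a minimal sketch, assuming the hash() method this PR adds to routers:

	router
		.init()
		.map_err(|_| Error::<T>::RouterInitFailed { router: router.hash() })?;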

@cdamian (author):

Awesome suggestion, thank you!

Contributor:

I didn't know Substrate supported custom content in the error; if it works, amazing!

Contributor:

Substrate did not support this from genesis but IIRC it was added early last year. However, since we have lagged behind Substrate versions so much, we haven't utilised it yet.

pallets/liquidity-pools-gateway/src/lib.rs (outdated, resolved)
}

fn get_expected_message_proof_count() -> u32 {
T::MultiRouterCount::get() - 1
Contributor:

Again, we must use safe math as long as we cannot restrict MultiRouterCount to be at least one.

Suggested change
T::MultiRouterCount::get() - 1
T::MultiRouterCount::get().saturating_sub(1)

@cdamian (author):

Yes, most of the logic for inbound messages is super-dirty, will change all of these once I figure out proof processing accordingly.

Contributor:

Makes absolute sense. Better have something working with some dirt than have a shining non-functional beauty 😅

Comment on lines 670 to 730
match InboundMessageProofCount::<T>::try_mutate(message_proof, |count| {
count.ensure_add_assign(1)?;

Ok(*count)
}) {
Ok(r) => r,
Err(e) => return Err(e),
};
Contributor:

Am I missing something, or can we improve readability by utilising the ? operator:

Suggested change
match InboundMessageProofCount::<T>::try_mutate(message_proof, |count| {
count.ensure_add_assign(1)?;
Ok(*count)
}) {
Ok(r) => r,
Err(e) => return Err(e),
};
match InboundMessageProofCount::<T>::try_mutate(message_proof, |count| {
count.ensure_add_assign(1)?;
Ok(*count)
})?;

@cdamian (author):

Might drop these fugly try_mutates.

pallets/liquidity-pools-gateway/src/lib.rs (outdated, resolved)

@@ -544,12 +735,12 @@ pub mod pallet {
/// weight for these operations in the `DispatchResultWithPostInfo`.
fn process_outbound_message(
sender: T::AccountId,
domain: Domain,
Contributor:

Placeholder (might answer itself further down below): I am curious how or where we determine the destination domain instead now?

@cdamian (author) commented Aug 8, 2024:

Too many conflicts, gonna start fresh from main.

@cdamian force-pushed the feat/lp-v2-gateway-multi-router branch 5 times, most recently from 3a23458 to 80dbb03 on August 9, 2024
@cdamian force-pushed the feat/lp-v2-gateway-multi-router branch from 80dbb03 to 9be4c58 on August 10, 2024
}
}

mod four_messages {
Contributor:

If it works for 3, can we ensure it will always work for 4?

@cdamian (author):

In this case, we are testing with 2 routers, and the minimum number of messages needed for a successful message submission is 2. So, if it works with 2, it should work with more messages.

Initially, I added tests for up to 4 messages to confirm that duplicate messages are processed accordingly. Still thinking about generating correct expected results so we can slim these tests down a bit.

@cdamian (author) commented Aug 13, 2024:

Will close this and rebase on #1958 soon.

@cdamian closed this on Aug 13, 2024.